Likelihood-based deep generative models have recently been shown to exhibit pathological behaviour under the manifold hypothesis as a consequence of using high-dimensional densities to model data with low-dimensional structure. In this paper we propose two methodologies aimed at addressing this problem. Both are based on adding Gaussian noise to the data to remove the dimensionality mismatch during training, and both provide a denoising mechanism whose goal is to sample from the model as though no noise had been added to the data. Our first approach is based on Tweedie's formula, and the second on models which take the variance of added noise as a conditional input. We show that, surprisingly, while well motivated, these approaches only sporadically improve performance over not adding noise, and that other methods of addressing the dimensionality mismatch are more empirically adequate.
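As a point of reference for the Tweedie-based denoiser: if $y = x + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, Tweedie's formula gives $\mathbb{E}[x \mid y] = y + \sigma^2 \nabla_y \log p_\sigma(y)$, where $p_\sigma$ is the density of the noisy data. Below is a minimal PyTorch sketch of this denoising step; the `log_prob` callable, the function name, and the interface are illustrative assumptions, not the paper's implementation.

```python
import torch

def tweedie_denoise(y, log_prob, sigma):
    """Tweedie's formula: E[x | y] = y + sigma^2 * grad_y log p_sigma(y),
    where y = x + N(0, sigma^2 I) and log_prob evaluates the model's density
    of the *noisy* data (interface assumed for illustration)."""
    y = y.detach().requires_grad_(True)
    score = torch.autograd.grad(log_prob(y).sum(), y)[0]  # estimate of grad log p_sigma(y)
    return (y + sigma ** 2 * score).detach()
```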
Deep learning has had tremendous success at learning low-dimensional representations of high-dimensional data. This success would be impossible if there were no hidden low-dimensional structure in data of interest; its existence is posited by the manifold hypothesis, which states that the data lies on an unknown manifold of low intrinsic dimension. In this paper, we argue that this hypothesis does not properly capture the low-dimensional structure typically present in data. Assuming that data lies on a single manifold implies that the intrinsic dimension is identical across the entire data space, and does not allow subregions of this space to have a different number of factors of variation. To address this deficiency, we put forth the union of manifolds hypothesis, which accommodates the existence of non-constant intrinsic dimensions. We empirically verify this hypothesis on commonly used image datasets, finding that intrinsic dimension should indeed be allowed to vary. We also show that classes with higher intrinsic dimension are harder to classify, and how this insight can be used to improve classification accuracy. We then turn our attention to the impact of this hypothesis in the context of deep generative models (DGMs). Most current DGMs struggle to model datasets with several connected components and/or varying intrinsic dimensions. To tackle these shortcomings, we propose clustered DGMs, which first cluster the data and then train a DGM on each cluster. We show that clustered DGMs can model multiple connected components with differing intrinsic dimensions, and empirically outperform their non-clustered counterparts without increased computational requirements.
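Below is a minimal sketch of the clustered-DGM recipe described above: cluster the data, fit a separate generative model on each cluster, and sample clusters in proportion to their size. The use of scikit-learn's KMeans and a Gaussian mixture as the per-cluster model is an illustrative assumption; any DGM could take the mixture's place.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def fit_clustered_dgm(X, n_clusters=3, seed=0):
    """Cluster the data, then fit one generative model per cluster.
    A Gaussian mixture stands in for an arbitrary per-cluster DGM."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    weights = np.bincount(labels, minlength=n_clusters) / len(X)
    models = [GaussianMixture(n_components=2, random_state=seed).fit(X[labels == k])
              for k in range(n_clusters)]
    return weights, models

def sample_clustered_dgm(weights, models, n_samples, seed=0):
    """Pick clusters in proportion to their size, then sample from each cluster's model."""
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(n_samples, weights)
    return np.concatenate([m.sample(c)[0] for m, c in zip(models, counts) if c > 0])
```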
Natural data observed in $\mathbb{R}^n$ is often constrained to an $m$-dimensional manifold $\mathcal{M}$, where $m < n$. Current generative models represent this manifold by mapping an $m$-dimensional latent variable through a neural network $f_\theta: \mathbb{R}^m \to \mathbb{R}^n$. Such procedures, which we call pushforward models, incur a straightforward limitation: manifolds cannot in general be represented with a single parameterization, so attempts to do so will result in computational instability or an inability to learn probability densities within the manifold. To remedy this problem, we propose to model $\mathcal{M}$ as a neural implicit manifold: the zero set of a neural network. To learn the data distribution within $\mathcal{M}$, we introduce the constrained energy-based model, which uses a constrained variant of Langevin dynamics to train and sample within the learned manifold. The resulting model can be manipulated with an arithmetic of manifolds which allows practitioners to take unions and intersections of model manifolds. In experiments on synthetic and natural data, we show that constrained EBMs can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
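The following sketch conveys the general flavour of constrained Langevin sampling on a neural implicit manifold: take a noisy gradient step on the energy in ambient space, then approximately project back onto the zero set of the constraint network. The simple gradient-based projection and all step sizes here are assumptions made for illustration, not necessarily the paper's exact sampler.

```python
import torch

def constrained_langevin_step(x, energy, constraint, step=1e-3, proj_steps=5, proj_lr=0.1):
    """One illustrative constrained-Langevin update: a noisy gradient step on the
    energy in ambient space, followed by an approximate projection back onto the
    implicit manifold {x : constraint(x) = 0} by minimizing ||constraint(x)||^2."""
    x = x.detach().requires_grad_(True)
    grad_E = torch.autograd.grad(energy(x).sum(), x)[0]
    x = (x - step * grad_E + (2 * step) ** 0.5 * torch.randn_like(x)).detach()
    for _ in range(proj_steps):  # crude gradient-based retraction onto the zero set
        x = x.requires_grad_(True)
        violation = (constraint(x) ** 2).sum()
        x = (x - proj_lr * torch.autograd.grad(violation, x)[0]).detach()
    return x
```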
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities. This formulation directly contradicts the manifold hypothesis, which states that observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space. In this paper we investigate the pathologies of maximum-likelihood training in the presence of this dimensionality mismatch. We formally prove that degenerate optima are achieved wherein the manifold itself is learned but not the distribution on it, a phenomenon we call manifold overfitting. We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting. We also show that these procedures enable density estimation on the manifolds learned by implicit models, such as generative adversarial networks, hence addressing a major shortcoming of these models. Several recently proposed methods are instances of our two-step procedures; we thus unify, extend, and theoretically justify a large class of models.
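A minimal sketch of a two-step procedure is shown below, with PCA standing in for a learned dimensionality-reduction step (e.g., an autoencoder or a GAN-learned manifold) and a Gaussian mixture standing in for the low-dimensional maximum-likelihood density estimator; both choices are illustrative assumptions rather than the paper's specific instantiations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def two_step_fit(X, latent_dim=2, seed=0):
    """Step 1: reduce dimensionality (PCA stands in for a learned encoder/decoder).
    Step 2: fit a likelihood-based density model in the low-dimensional space."""
    reducer = PCA(n_components=latent_dim, random_state=seed).fit(X)
    Z = reducer.transform(X)
    density = GaussianMixture(n_components=5, random_state=seed).fit(Z)
    return reducer, density

def two_step_sample(reducer, density, n_samples):
    """Sample in the low-dimensional space, then map back to ambient space."""
    Z, _ = density.sample(n_samples)
    return reducer.inverse_transform(Z)
```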
Skills or low-level policies in reinforcement learning are temporally extended actions that can speed up learning and enable complex behaviours. Recent work in offline reinforcement learning and imitation learning has proposed several techniques for skill discovery from a set of expert trajectories. While these methods are promising, the number K of skills to discover is always a fixed hyperparameter, which requires either prior knowledge about the environment or an additional parameter search to tune it. We first propose a method for offline learning of options (a particular skill framework) that exploits advances in variational inference and continuous relaxations. We then highlight an unexplored connection between Bayesian nonparametrics and offline skill discovery, and show how to obtain a nonparametric version of our model. Thanks to a carefully structured approximate posterior with a dynamically changing number of options, this version is tractable and removes the need to specify K. We also show how our nonparametric extension can be applied in other skill frameworks, and empirically demonstrate that our method can outperform state-of-the-art offline skill learning algorithms across a variety of environments. Our code is available at https://github.com/layer6ai-labs/bnpo.
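The Bayesian-nonparametric ingredient that removes the fixed number of skills can be illustrated with a stick-breaking prior over skill weights, sketched below under a finite truncation. This is only the prior construction, not the paper's full variational model, and the interface is an assumption.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng=None):
    """Draw skill weights from a truncated stick-breaking process:
    beta_k ~ Beta(1, alpha), w_k = beta_k * prod_{j<k} (1 - beta_j).
    Larger alpha spreads mass over more skills, so the effective number of
    skills adapts to the data rather than being fixed in advance."""
    rng = rng or np.random.default_rng()
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    w = betas * remaining
    w[-1] = 1.0 - w[:-1].sum()  # fold leftover mass into the final stick
    return w
```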
Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allows their parameters to be efficiently optimized via maximum likelihood. However, data of interest are typically assumed to live on some (often unknown) low-dimensional manifold embedded in a high-dimensional ambient space. The result is a modelling mismatch by construction: the invertibility requirement implies high-dimensional support of the learned distribution. Injective flows, mappings from low- to high-dimensional spaces, aim to fix this discrepancy by learning distributions on manifolds, but the resulting volume-change term becomes more challenging to compute. Current approaches either avoid computing this term entirely using various heuristics, or assume the manifold is known beforehand and are therefore not widely applicable. Instead, we propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model, relying on careful use of automatic differentiation and techniques from numerical linear algebra. Both approaches perform end-to-end nonlinear manifold learning and density estimation for the data projected onto this manifold. We study the trade-offs between our proposed methods, empirically verify that we outperform approaches that ignore the volume-change term by learning manifolds and the corresponding distributions on them more accurately, and show promising results on out-of-distribution detection. Our code is available at https://github.com/layer6ai-labs/rectangular-flows.
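For small latent dimensions, the volume-change term of an injective flow $f_\theta: \mathbb{R}^m \to \mathbb{R}^n$, namely $\tfrac{1}{2}\log\det(J_{f_\theta}^\top J_{f_\theta})$, can be computed by brute force with automatic differentiation, as in the sketch below. The paper's contribution is precisely to estimate the gradient of this term tractably when such brute-force computation is infeasible, so this snippet is only a reference implementation of the quantity itself, with an assumed interface.

```python
import torch
from torch.autograd.functional import jacobian

def injective_log_volume_change(f, z):
    """Exact volume-change term 0.5 * logdet(J^T J) of an injective map
    f: R^m -> R^n evaluated at z, via brute-force autodiff. Feasible only
    for small m; the paper instead estimates the gradient of this term at scale."""
    J = jacobian(f, z)          # Jacobian of shape (n, m) for z of shape (m,)
    JtJ = J.T @ J               # (m, m) Gram matrix
    return 0.5 * torch.logdet(JtJ)
```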
The Gumbel-Softmax is a continuous distribution over the simplex that is often used as a relaxation of discrete distributions. Because it can be readily interpreted and easily reparameterized, it enjoys widespread use. We propose a modular and more flexible family of reparameterizable distributions in which Gaussian noise is transformed into a one-hot approximation through an invertible function. This invertible function is composed of a modified softmax and can incorporate diverse transformations that serve different specific purposes. For example, the stick-breaking procedure allows us to extend the reparameterization trick to distributions with countably infinite support, enabling their use alongside nonparametric models, while normalizing flows let us increase the flexibility of the distribution. Our construction enjoys theoretical advantages over the Gumbel-Softmax, such as a closed-form KL divergence, and significantly outperforms it in a variety of experiments. Our code is available at https://github.com/cunningham-lab/igr.
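A minimal sketch of the basic construction is given below, assuming the modified softmax simply appends a constant extra logit so that $\mathbb{R}^{K-1}$ maps invertibly onto the interior of the simplex; the temperature and the extra-logit value are illustrative choices, not the repository's defaults.

```python
import torch

def invertible_gaussian_reparam(mu, log_sigma, temperature=0.5, delta=1.0):
    """Reparameterized sample on the simplex: Gaussian noise is pushed through an
    invertible map built from a modified softmax that appends a constant extra
    logit (delta), so R^{K-1} maps bijectively onto the interior of the simplex."""
    eps = torch.randn_like(mu)
    y = mu + torch.exp(log_sigma) * eps                  # Gaussian sample in R^{K-1}
    logits = torch.cat([y, torch.full_like(y[..., :1], delta)], dim=-1)
    return torch.softmax(logits / temperature, dim=-1)   # relaxed one-hot on the simplex
```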
Non-linear state-space models, also known as general hidden Markov models, are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences in general. The particle-based, rapid incremental smoother PaRIS is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models. Such expectations appear naturally in several learning contexts, such as maximum-likelihood estimation (MLE) and Markov score climbing (MSC). PaRIS has linear computational complexity, limited memory requirements and comes with non-asymptotic bounds, convergence results and stability guarantees. Still, being based on self-normalised importance sampling, the PaRIS estimator is biased. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. Our second contribution is to apply PPG in a learning framework, covering MLE and MSC as special examples. In this context, we establish, under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao--Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
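For readers unfamiliar with the SMC machinery that PaRIS and PPG build on, the sketch below shows a plain bootstrap particle filter with self-normalized importance weights and multinomial resampling; it is not PaRIS or PPG themselves, and the callable interfaces are assumptions made for illustration.

```python
import numpy as np

def bootstrap_particle_filter(y, init, transition, log_lik, n_particles=500, seed=0):
    """Minimal bootstrap SMC filter: propagate particles through the state transition,
    weight them by the observation likelihood (self-normalized), and resample.
    Returns an estimate of the log-likelihood of the observation sequence y."""
    rng = np.random.default_rng(seed)
    x = init(rng, n_particles)                 # initial particles, shape (n_particles, ...)
    log_Z = 0.0
    for t, y_t in enumerate(y):
        if t > 0:
            x = transition(rng, x)             # propagate through the state dynamics
        logw = log_lik(y_t, x)                 # per-particle observation log-likelihoods
        m = logw.max()
        w = np.exp(logw - m)
        log_Z += m + np.log(w.mean())          # incremental log-likelihood estimate
        w /= w.sum()                           # self-normalized importance weights
        x = x[rng.choice(n_particles, size=n_particles, p=w)]  # multinomial resampling
    return log_Z
```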
In order for artificial neural networks to begin accurately mimicking biological ones, they must be able to adapt to new exigencies without forgetting what they have learned from previous training. Lifelong learning approaches to artificial neural networks strive towards this goal, yet have not progressed far enough to be realistically deployed for natural language processing tasks. The proverbial roadblock of catastrophic forgetting still keeps researchers from an adequate lifelong learning model. While efforts are being made to quell catastrophic forgetting, there is a lack of research into the importance of class ordering when training on new classes for incremental learning. This is surprising, as the ordering of "classes" that humans learn is heavily monitored and incredibly important. While heuristics for developing an ideal class order have been researched, this paper examines class ordering as it relates to priming as a scheme for incremental class learning. By examining the connections between various methods of priming found in humans and how those are mimicked yet remain unexplained in lifelong machine learning, this paper provides a better understanding of the similarities between biological and synthetic systems while simultaneously improving current practices to combat catastrophic forgetting. Through the merging of psychological priming practices with class ordering, this paper identifies a generalizable method for class ordering in NLP incremental learning tasks that consistently outperforms random class ordering.
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.